Action recognition models have achieved impressive results by incorporating scene-level annotations, such as objects, their relations, 3D structure, and more. However, scene structure annotations for videos require significant effort to gather and curate, making these methods expensive to train. In contrast, synthetic datasets generated by graphics engines provide a powerful alternative for generating scene-level annotations across multiple tasks. In this work, we propose an approach to leverage synthetic scene data for improving video understanding. We present a multi-task prompt learning approach for video transformers, where a shared video transformer backbone is enhanced by a small set of specialized parameters for each task. Specifically, we add a set of ``task prompts'', each corresponding to a different task, and let each prompt predict task-related annotations. This design allows the model to capture information shared among synthetic scene tasks as well as information shared between synthetic scene tasks and a real video downstream task throughout the entire network. We refer to this approach as ``Promptonomy'', since the prompts model a task-related structure. We propose the PromptonomyViT model (PViT), a video transformer that incorporates various types of scene-level information from synthetic data using the ``Promptonomy'' approach. PViT shows strong performance improvements on multiple video understanding tasks and datasets.
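As a rough illustration of the ``Promptonomy'' idea, here is a minimal sketch of prepending per-task prompt tokens to a shared transformer and reading each task's prediction off its own prompt. This is not the authors' released code; the class name, tasks, dimensions, and heads are all hypothetical.

```python
import torch
import torch.nn as nn

class PromptonomySketch(nn.Module):
    """Minimal sketch: a shared encoder plus one learnable prompt per task.

    Each task prompt is appended to the patch tokens; after the shared
    transformer runs, each task head reads only its own prompt's output.
    """

    def __init__(self, dim=256, depth=4, tasks=("depth", "normals", "objects")):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # One learnable prompt token per synthetic-scene task.
        self.prompts = nn.ParameterDict(
            {t: nn.Parameter(torch.randn(1, 1, dim)) for t in tasks}
        )
        # Each prompt gets its own small prediction head (hypothetical output dims).
        self.heads = nn.ModuleDict({t: nn.Linear(dim, 64) for t in tasks})

    def forward(self, patch_tokens):  # (B, N, dim) video patch tokens
        B = patch_tokens.shape[0]
        names = list(self.prompts)
        prompt_seq = torch.cat(
            [self.prompts[t].expand(B, -1, -1) for t in names], dim=1
        )
        # The shared backbone processes patches and all task prompts together.
        x = self.encoder(torch.cat([patch_tokens, prompt_seq], dim=1))
        n = patch_tokens.shape[1]
        # Read each task's prediction off its own prompt position.
        return {t: self.heads[t](x[:, n + i]) for i, t in enumerate(names)}

model = PromptonomySketch()
out = model(torch.randn(2, 16, 256))  # e.g. 16 tokens from a video clip
print({k: v.shape for k, v in out.items()})
```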
Foundation Models (FMs) have demonstrated unprecedented capabilities, including zero-shot learning, high-fidelity data synthesis, and out-of-domain generalization. However, as we show in this paper, FMs still have poor out-of-the-box performance on expert tasks (e.g., retrieving car-manual technical illustrations from language queries) whose data is either unseen or belongs to a long-tail part of the data distribution of the large datasets used for FM pre-training. This underlines the necessity to explicitly evaluate and fine-tune FMs on such expert tasks, arguably the ones that matter most in practical real-world applications. In this paper, we propose FETA, a benchmark built around the task of teaching FMs to understand technical documentation, via learning to match graphical illustrations to their corresponding language descriptions. Our FETA benchmark focuses on text-to-image and image-to-text retrieval in public car manuals and sales-catalogue brochures. FETA is equipped with a procedure for completely automatic annotation extraction (code to be released upon acceptance), allowing easy extension of FETA to more documentation types and application domains in the future. Our automatic annotations lead to an automated performance metric shown to be consistent with metrics computed on human-curated annotations (also released). We provide multiple baselines and an analysis of popular FMs on FETA, leading to several interesting findings that we believe will be very valuable to the FM community, paving the way towards real-world application of FMs for practical expert tasks currently "overlooked" by standard benchmarks focusing on common objects.
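Retrieval benchmarks of this kind are typically scored with recall@k. Below is a small, generic sketch of that metric for text-to-image retrieval over precomputed embeddings; the embedding source and k values are assumptions, not FETA's released evaluation code.

```python
import torch

def recall_at_k(text_emb, image_emb, gt_index, ks=(1, 5, 10)):
    """Generic text-to-image recall@k over L2-normalized embeddings.

    text_emb:  (Q, D) query embeddings; image_emb: (G, D) gallery embeddings;
    gt_index:  (Q,) index of the correct image for each query.
    """
    text_emb = torch.nn.functional.normalize(text_emb, dim=-1)
    image_emb = torch.nn.functional.normalize(image_emb, dim=-1)
    sims = text_emb @ image_emb.T                  # cosine similarity (Q, G)
    ranks = sims.argsort(dim=-1, descending=True)  # best match first
    hits = ranks == gt_index[:, None]              # one True per row
    return {k: hits[:, :k].any(dim=-1).float().mean().item() for k in ks}

scores = recall_at_k(torch.randn(100, 512), torch.randn(300, 512),
                     torch.randint(0, 300, (100,)))
print(scores)  # e.g. {1: ..., 5: ..., 10: ...}
```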
This technical report describes the SViT approach for the Ego4D Point of No Return (PNR) Temporal Localization Challenge. We propose a structured learning framework (SViT for short), which demonstrates how utilizing the structure of a small number of images, available only during training, can improve a video model. SViT relies on two key insights. First, as both images and videos contain structured information, we enrich a transformer model with a set of \emph{object tokens} that can be used across images and videos. Second, the scene representations of individual frames in a video should "align" with those of still images. This is achieved via a "Frame-Clip Consistency" loss, which ensures the flow of structured information between images and videos. SViT obtains strong performance on the challenge test set, with 0.656 absolute temporal localization error.
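A minimal sketch of what a frame-clip consistency style loss could look like: the per-frame object-token outputs of the video (clip) pathway are pulled toward those of the same frames processed as still images. This is an illustrative reconstruction under assumed shapes, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def frame_clip_consistency(clip_tokens, image_tokens):
    """Illustrative consistency loss between object tokens computed two ways.

    clip_tokens:  (B, T, K, D) object tokens from frames processed as a clip.
    image_tokens: (B, T, K, D) object tokens from the same frames processed
                  independently as still images.
    Pulls the two representations together, so structure learned from
    annotated images flows into the video model.
    """
    clip_tokens = F.normalize(clip_tokens, dim=-1)
    image_tokens = F.normalize(image_tokens, dim=-1)
    # 1 - cosine similarity, averaged over batch, time, and object slots.
    return (1 - (clip_tokens * image_tokens).sum(-1)).mean()

loss = frame_clip_consistency(torch.randn(2, 8, 4, 256),
                              torch.randn(2, 8, 4, 256))
print(loss.item())
```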
Recent action recognition models have achieved impressive results by integrating objects, their locations, and their interactions. However, obtaining dense structured annotations for every frame is tedious and time-consuming, making these methods expensive to train and less scalable. At the same time, if a small set of annotated images is available, either within or outside the domain of interest, how can we leverage these images for a downstream video task? We propose a structured learning framework (SViT for short), which demonstrates how utilizing the structure of a small number of images, available only during training, can improve a video model. SViT relies on two key insights. First, as both images and videos contain structured information, we enrich a transformer model with a set of \emph{object tokens} that can be used across images and videos. Second, the scene representations of individual frames in a video should "align" with those of still images. This is achieved via a \emph{Frame-Clip Consistency} loss, which ensures the flow of structured information between images and videos. We explore a particular instantiation of scene structure, namely a \emph{Hand-Object Graph}, consisting of hands and objects with their locations as nodes, and the physical relations of contact/no-contact as edges. SViT shows strong performance improvements on multiple video understanding tasks and datasets, and it won first place in the Ego4D CVPR'22 Object State Localization challenge. For code and pretrained models, visit the project page at \url{https://eladb3.github.io/svit/}
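To make the \emph{object tokens} idea concrete, here is a toy sketch of enriching a transformer's input with a fixed set of learned object-slot tokens that could be shared between the image and video pathways. Names and sizes are hypothetical; see the project page for the real code.

```python
import torch
import torch.nn as nn

class ObjectTokenEncoder(nn.Module):
    """Toy encoder: K learned object tokens are appended to patch tokens,
    so the same slots can carry hand/object structure for images and videos."""

    def __init__(self, dim=256, num_object_tokens=4, depth=2):
        super().__init__()
        self.object_tokens = nn.Parameter(torch.randn(1, num_object_tokens, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.box_head = nn.Linear(dim, 4)  # e.g. predict a box per object slot

    def forward(self, patch_tokens):  # (B, N, dim), image or frame patches
        B, N, _ = patch_tokens.shape
        slots = self.object_tokens.expand(B, -1, -1)
        x = self.encoder(torch.cat([patch_tokens, slots], dim=1))
        obj = x[:, N:]                          # updated object tokens
        return obj, self.box_head(obj)          # supervise on annotated images only

enc = ObjectTokenEncoder()
obj, boxes = enc(torch.randn(2, 196, 256))
print(obj.shape, boxes.shape)  # (2, 4, 256) (2, 4, 4)
```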
The ability to generalize learned representations across significantly different visual domains, such as between real photos, clipart, paintings, and sketches, is a fundamental capacity of the human visual system. In this paper, unlike most cross-domain work that utilizes some (or full) source-domain supervision, we approach a relatively new and very practical Unsupervised Domain Generalization (UDG) setup with no training supervision in either the source or the target domains. Our approach is based on self-supervised learning of a Bridge Across Domains (BrAD) - an auxiliary bridge domain accompanied by a set of semantics-preserving visual (image-to-image) mappings to BrAD from each of the training domains. BrAD and the mappings to it are learned jointly (end-to-end) with a contrastive self-supervised representation model that semantically aligns each of the domains to its BrAD projection, and hence implicitly drives all the domains (seen or unseen) to semantically align to one another. In this work, we show how, using an edge-regularized BrAD, our approach achieves significant gains across multiple benchmarks and a range of tasks, including UDG, few-shot UDA, and unsupervised generalization across multi-domain datasets (including generalization to unseen domains and classes).
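A schematic sketch of the kind of contrastive alignment described here: each image is paired with its bridge-domain projection (e.g., an edge map), and an InfoNCE-style loss aligns the two embeddings across the batch. This is a generic reconstruction under assumed components, not the paper's code.

```python
import torch
import torch.nn.functional as F

def brad_alignment_loss(img_emb, bridge_emb, temperature=0.07):
    """InfoNCE-style loss: the i-th image embedding should match the i-th
    bridge-domain (e.g., edge-map) embedding against all others in the batch."""
    img_emb = F.normalize(img_emb, dim=-1)
    bridge_emb = F.normalize(bridge_emb, dim=-1)
    logits = img_emb @ bridge_emb.T / temperature  # (B, B) similarity matrix
    targets = torch.arange(img_emb.shape[0])       # matching pairs on the diagonal
    # Symmetric: image -> bridge and bridge -> image.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

loss = brad_alignment_loss(torch.randn(16, 128), torch.randn(16, 128))
print(loss.item())
```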
Recently, video transformers have shown great success in video understanding, exceeding CNN performance; yet existing video transformer models do not explicitly model objects, although objects can be essential for recognizing actions. In this work, we present Object-Region Video Transformers (ORViT), an \emph{object-centric} approach that extends video transformer layers with a block that directly incorporates object representations. The key idea is to fuse object-centric representations starting from early layers and propagate them into the transformer layers, thus affecting the spatio-temporal representations throughout the network. Our ORViT block consists of two object-level streams: appearance and dynamics. In the appearance stream, an "Object-Region Attention" module applies self-attention over the patches and \emph{object regions}. In this way, visual object regions interact with the uniform patch tokens and enrich them with contextualized object information. We further model object dynamics via a separate "Object-Dynamics Module", which captures trajectory interactions, and show how to integrate the two streams. We evaluate our model on four tasks and five datasets: compositional and few-shot action recognition on SomethingElse, spatio-temporal action detection on AVA, and standard action recognition on Something-Something v2, Diving48, and Epic-Kitchens100. We show strong performance improvements across all tasks and datasets considered, demonstrating the value of a model that incorporates object representations into a transformer architecture. For code and pretrained models, visit the project page at \url{https://roeiherz.github.io/orvit/}
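A rough sketch of the "Object-Region Attention" idea: per-frame object boxes are pooled into region tokens (here with torchvision's roi_align) and attended to jointly with patch tokens. The shapes and pooling choice are illustrative assumptions; consult the project page for the actual block.

```python
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class ObjectRegionAttention(nn.Module):
    """Toy version: pool object regions into tokens and let patches and
    object tokens attend to each other in one self-attention call."""

    def __init__(self, dim=256):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, patch_tokens, feat_map, boxes):
        # patch_tokens: (B, N, D); feat_map: (B, D, H, W); boxes: list of (K, 4)
        pooled = roi_align(feat_map, boxes, output_size=1)  # (B*K, D, 1, 1)
        B, D = feat_map.shape[:2]
        obj_tokens = pooled.flatten(1).view(B, -1, D)       # (B, K, D)
        x = torch.cat([patch_tokens, obj_tokens], dim=1)
        out, _ = self.attn(x, x, x)                         # patches <-> regions
        return out[:, : patch_tokens.shape[1]]              # enriched patch tokens

block = ObjectRegionAttention()
boxes = [torch.tensor([[0., 0., 7., 7.], [4., 4., 13., 13.]]) for _ in range(2)]
out = block(torch.randn(2, 196, 256), torch.randn(2, 256, 14, 14), boxes)
print(out.shape)  # (2, 196, 256)
```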
Recent self-supervised pretraining methods for object detection largely focus on pretraining the backbone of the object detector, neglecting key parts of the detection architecture. Instead, we introduce DETReg, a new self-supervised method that pretrains the entire object detection network, including the object localization and embedding components. During pretraining, DETReg predicts object localizations to match the localizations from an unsupervised region proposal generator, and simultaneously aligns the corresponding feature embeddings with embeddings from a self-supervised image encoder. We implement DETReg using the DETR family of detectors and show that it improves over competitive baselines when fine-tuned on the COCO, PASCAL VOC, and Airbus Ship benchmarks. In low-data regimes, including semi-supervised and few-shot learning settings, DETReg establishes many state-of-the-art results, e.g., on COCO we see a +6.0 AP improvement for 10-shot detection and a +3.5 AP improvement when training with only 1% of the labels. For code and pretrained models, visit the project page at https://amirbar.net/detreg
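Schematically, the pretraining objective pairs each predicted box with an unsupervised region proposal and each predicted embedding with a frozen self-supervised descriptor of that region. A simplified per-image loss might look like the sketch below; it assumes the prediction-to-proposal matching has already been done (the real method uses Hungarian matching as in DETR, plus a GIoU term).

```python
import torch
import torch.nn.functional as F

def detreg_style_loss(pred_boxes, pred_emb, proposal_boxes, region_desc,
                      lambda_emb=1.0):
    """Simplified DETReg-style pretraining loss for one image, after matching.

    pred_boxes:     (M, 4) predicted boxes, matched to...
    proposal_boxes: (M, 4) boxes from an unsupervised proposal generator
                    (e.g., selective search).
    pred_emb:       (M, D) predicted object embeddings, matched to...
    region_desc:    (M, D) frozen self-supervised encoder features of the crops.
    """
    loc_loss = F.l1_loss(pred_boxes, proposal_boxes)  # real method adds GIoU
    emb_loss = F.l1_loss(pred_emb, region_desc)
    return loc_loss + lambda_emb * emb_loss

loss = detreg_style_loss(torch.rand(8, 4), torch.randn(8, 256),
                         torch.rand(8, 4), torch.randn(8, 256))
print(loss.item())
```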
Abstractive summarization has enjoyed renewed interest in recent years, thanks to pre-trained language models and the availability of large-scale datasets. Despite promising results, current models still suffer from generating factually inconsistent summaries, reducing their utility for real-world applications. Several recent efforts attempt to address this by devising models that automatically detect factual inconsistencies in machine-generated summaries. However, they focus exclusively on English, a language with abundant resources. In this work, we leverage factual consistency evaluation models to improve multilingual summarization. We explore two intuitive approaches to mitigate hallucinations based on the signal provided by a multilingual NLI model, namely data filtering and controlled generation. Experimental results on the 45 languages of the XLSum dataset show gains over strong baselines in both automatic and human evaluation.
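As a sketch of the data-filtering approach: score each (document, reference summary) training pair with a multilingual NLI model and keep only pairs whose entailment probability clears a threshold. The scorer below is a deliberately trivial stand-in for a real NLI model, and the threshold is an illustrative placeholder, not the paper's exact setup.

```python
def filter_training_pairs(pairs, entail_prob, threshold=0.5):
    """Keep (document, summary) pairs whose summary the NLI model judges
    entailed by the document. `entail_prob` is any callable returning
    P(entailment | premise, hypothesis) from a multilingual NLI model."""
    return [(doc, summ) for doc, summ in pairs
            if entail_prob(premise=doc, hypothesis=summ) >= threshold]

# Trivial stand-in for a real multilingual NLI scorer (word overlap).
def toy_entail_prob(premise, hypothesis):
    overlap = len(set(hypothesis.split()) & set(premise.split()))
    return overlap / max(len(hypothesis.split()), 1)

pairs = [("the cat sat on the mat", "a cat sat on a mat"),
         ("the cat sat on the mat", "the dog barked loudly")]
print(filter_training_pairs(pairs, toy_entail_prob, threshold=0.4))
```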
A learned system uses machine learning (ML) internally to improve performance. We can expect such systems to be vulnerable to some adversarial-ML attacks. Often, the learned component is shared between mutually-distrusting users or processes, much like microarchitectural resources such as caches, potentially giving rise to highly-realistic attacker models. However, compared to attacks on other ML-based systems, attackers face a level of indirection as they cannot interact directly with the learned model. Additionally, the difference between the attack surface of learned and non-learned versions of the same system is often subtle. These factors obfuscate the de-facto risks that the incorporation of ML carries. We analyze the root causes of potentially-increased attack surface in learned systems and develop a framework for identifying vulnerabilities that stem from the use of ML. We apply our framework to a broad set of learned systems under active development. To empirically validate the many vulnerabilities surfaced by our framework, we choose three of them and implement and evaluate exploits against prominent learned-system instances. We show that the use of ML caused leakage of past queries in a database, enabled a poisoning attack that causes exponential memory blowup in an index structure and crashes it in seconds, and enabled index users to snoop on each other's key distributions by timing queries over their own keys. We find that adversarial ML is a universal threat against learned systems, point to open research gaps in our understanding of learned-systems security, and conclude by discussing mitigations, while noting that data leakage is inherent in systems whose learned component is shared between multiple parties.
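To illustrate the timing channel in the last finding, consider a toy learned index: a linear model predicts a key's position and a local search corrects the error, so lookup time grows with model error and therefore depends on everyone's inserted keys. A co-tenant timing its own lookups can thus learn something about other users' key distributions. The sketch below is purely illustrative and is not one of the systems evaluated in the paper.

```python
import bisect
import numpy as np

class ToyLearnedIndex:
    """A sorted array indexed by a linear model; lookup cost ~ |prediction error|."""

    def __init__(self, keys):
        self.keys = np.sort(np.asarray(keys, dtype=float))
        pos = np.arange(len(self.keys))
        self.slope, self.intercept = np.polyfit(self.keys, pos, deg=1)

    def lookup_steps(self, key):
        guess = int(self.slope * key + self.intercept)
        guess = min(max(guess, 0), len(self.keys) - 1)
        true_pos = bisect.bisect_left(self.keys.tolist(), key)
        return abs(true_pos - guess)  # proxy for lookup latency

# Victim inserts keys clustered near 5000; the attacker shares the index.
victim = np.random.normal(5000, 50, 2000)
attacker_probes = np.linspace(0, 10000, 11)
index = ToyLearnedIndex(np.concatenate([victim, attacker_probes]))
for k in attacker_probes:
    print(f"probe {k:7.0f}: {index.lookup_steps(k)} correction steps")
# The error profile across probes reflects where the victim's keys cluster,
# so timing one's own lookups leaks the shape of the victim's distribution.
```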
Large language models (LLMs) have shown impressive results across a variety of tasks while requiring little or no direct supervision. Further, there is mounting evidence that LLMs may have potential in information-seeking scenarios. We believe the ability of an LLM to attribute the text that it generates is likely to be crucial for both system developers and users in this setting. We propose and study Attributed QA as a key first step in the development of attributed LLMs. We develop a reproducible evaluation framework for the task, using human annotations as a gold standard and a correlated automatic metric that we show is suitable for development settings. We describe and benchmark a broad set of architectures for the task. Our contributions give some concrete answers to two key questions (How to measure attribution?, and How well do current state-of-the-art methods perform on attribution?), and give some hints as to how to address a third key question (How to build LLMs with attribution?).
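On the evaluation-framework side, here is a tiny sketch of how one might validate an automatic attribution metric against human judgments: treat human labels as gold and check how often the thresholded automatic scores agree. The scoring values and threshold are hypothetical placeholders (e.g., outputs of an NLI-based checker); nothing below reproduces the paper's actual metric.

```python
def agreement(human_labels, auto_scores, threshold=0.5):
    """Fraction of examples where thresholded automatic attribution scores
    agree with binary human judgments (the gold standard)."""
    preds = [s >= threshold for s in auto_scores]
    return sum(p == h for p, h in zip(preds, human_labels)) / len(human_labels)

# human_labels[i]: did annotators judge answer i as supported by its citation?
human_labels = [True, True, False, True, False]
auto_scores = [0.9, 0.7, 0.2, 0.4, 0.1]      # hypothetical automatic metric outputs
print(agreement(human_labels, auto_scores))  # 0.8
```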